Convergence Properties of Newton's Method in the Proximity of a Solution
Author
Abstract
We analyse the elementwise convergence of the Jacobian in the classical Newton method as the solution of a nonlinear system f(x) = 0 is approached. It is shown that each Jacobian element approaches a constant value at a rate determined by the nonlinearity of the corresponding function with respect to the corresponding scalar variable. For many practical problems, this means that in the last steps of the Newton iteration only a low-rank update has to be applied to the Jacobian. The consequences of these results are examined for both direct and iterative solution methods, and appropriate algorithmic modifications are stated. Finally, two numerical examples are presented to illustrate the theory.
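To make the algorithmic consequence concrete, the following Python sketch refreshes the Jacobian only while the residual is still large and then freezes it near the solution, where (per the abstract) its elements are essentially constant. The function names and the switching threshold `freeze_tol` are illustrative assumptions; the paper's actual modifications, in particular the low-rank Jacobian update, are more refined.

```python
import numpy as np

def newton_frozen_jacobian(f, jac, x0, tol=1e-10, max_iter=50, freeze_tol=1e-3):
    """Newton iteration that stops re-evaluating the Jacobian once the
    iterate is close to the root, where the Jacobian elements are nearly
    constant.  `freeze_tol` is a hypothetical switching threshold, not a
    quantity taken from the paper."""
    x = np.asarray(x0, dtype=float)
    J = jac(x)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        if np.linalg.norm(r) > freeze_tol:  # far from the root: refresh J
            J = jac(x)
        x = x - np.linalg.solve(J, r)       # Newton step with current J
    return x

# Toy system: x0^2 + x1^2 = 1 and x0 = x1, solution (1/sqrt(2), 1/sqrt(2))
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(newton_frozen_jacobian(f, jac, np.array([1.0, 0.5])))
```

With a frozen Jacobian the local convergence rate drops from quadratic to linear, which is often an acceptable trade against the cost of repeated factorisations.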
Similar Articles
Modifying the line search formula in the BFGS method to achieve global convergence
An efficient improvement of the Newton method for solving nonconvex optimization problems
The Newton method is one of the most famous numerical line-search methods for minimizing functions. It is well known that the search direction and the step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant ...
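The snippet breaks off before describing the specific modification, but its two ingredients, a search direction and a step length, can be illustrated with a generic damped Newton method using Armijo backtracking. Everything below (names, constants, the steepest-descent fallback) is an illustrative sketch, not the cited paper's algorithm.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=200, c1=1e-4, shrink=0.5):
    """Newton search direction plus Armijo backtracking for the step length."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)      # Newton direction
        if g @ d >= 0.0:                      # not a descent direction:
            d = -g                            # fall back to steepest descent
        t = 1.0
        while f(x + t * d) > f(x) + c1 * t * (g @ d):  # Armijo condition
            t *= shrink
        x = x + t * d
    return x

# Rosenbrock function as a standard unconstrained test problem
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(damped_newton(f, grad, hess, np.array([-1.2, 1.0])))  # ≈ (1, 1)
```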
On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles the Broyden family with line searches, except that the change in gradients is replaced by a certain hybrid vector before the current Hessian approximation is updated. This damping technique modifies the Hessian approximations so that they remain sufficiently positive definite ...
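A classical instance of replacing the gradient change by a hybrid vector is Powell's damping, sketched below for the BFGS member of the family. The blending parameter eta = 0.2 is Powell's traditional choice, and the cited paper's family is more general than this single update.

```python
import numpy as np

def powell_damped_bfgs_update(B, s, y, eta=0.2):
    """One Powell-damped BFGS update of the Hessian approximation B.
    s is the step x_new - x_old, y the gradient change g_new - g_old.
    y is blended with B @ s so that s @ y_hat >= eta * (s @ B @ s),
    which keeps the updated matrix positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= eta * sBs else (1.0 - eta) * sBs / (sBs - sy)
    y_hat = theta * y + (1.0 - theta) * Bs    # the hybrid ("damped") vector
    # standard BFGS update with y replaced by y_hat
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)
```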
On the convergence speed of artificial neural networks in the solving of linear systems
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper scrutinizes how diverse learning methods affect the speed of convergence of neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which is applied to solving a non-singular ...
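The simplest version of that idea, training a single linear neuron with the delta rule so that its weights solve A x = b, amounts to gradient descent on the squared residual. The sketch below illustrates only this general principle, assuming a square non-singular system; the learning schemes compared in the cited paper are not reproduced here.

```python
import numpy as np

def solve_linear_gd(A, b, lr=None, tol=1e-10, max_iter=100_000):
    """Solve A x = b by gradient descent on 0.5 * ||A x - b||^2,
    i.e. the delta-rule update of a single linear neuron."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step: 1 / ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = A @ x - b
        if np.linalg.norm(r) < tol:
            break
        x -= lr * (A.T @ r)                   # gradient of the loss
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve_linear_gd(A, b))                  # ≈ np.linalg.solve(A, b)
```

The speed of convergence of this iteration is governed by the conditioning of A, which is exactly the kind of behaviour a comparison of learning methods must address.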
Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming
In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm for convex quadratic symmetric cone programming ...
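The key relaxation, accepting an approximate solution of the Newton equation system, can be illustrated with a conjugate-gradient solver stopped at a loose relative residual. This is a generic inexact-Newton sketch assuming a symmetric positive definite system; the function name, the tolerance, and the stopping rule are illustrative, and the paper's error control over symmetric cones is more elaborate.

```python
import numpy as np

def inexact_newton_direction(A, b, rel_tol=1e-2, max_iter=None):
    """Approximately solve the Newton system A d = b with conjugate
    gradients, stopping once ||b - A d|| <= rel_tol * ||b||, so the
    returned direction is deliberately inexact."""
    n = b.size
    max_iter = max_iter if max_iter is not None else n
    d = np.zeros(n)
    r = b.copy()                  # residual b - A d for d = 0
    p = r.copy()
    rr = r @ r
    stop = (rel_tol * np.linalg.norm(b)) ** 2
    for _ in range(max_iter):
        if rr <= stop:            # loose tolerance reached: accept d
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        d += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return d
```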